James M. Rehg

Founder Professor

University of Illinois Urbana-Champaign

Biography

James M. Rehg (pronounced "ray") is a Founder Professor of Computer Science and Industrial and Enterprise Systems Engineering at the University of Illinois Urbana-Champaign. Previously, he was a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he co-directed the Center for Health Analytics and Informatics (CHAI). He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995 to 2001, where he managed the computer vision research group. He received an NSF CAREER award in 2001 and a Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005, BMVC 2010, Mobihealth 2014, and Face and Gesture 2015, as well as a Distinguished Paper Award from ACM IMWUT and a Method of the Year award from the journal Nature Methods. Dr. Rehg served as General co-Chair for CVPR 2009 and Program co-Chair for CVPR 2017. He has authored more than 200 peer-reviewed scientific papers and holds 26 issued US patents.

Introduction to the Rehg Lab

We conduct basic research in computer vision and machine learning, and work in a number of interdisciplinary areas: developmental and social psychology, autism research, mobile health, and robotics. The study of human social and cognitive behavior is a cross-cutting theme. We are developing novel methods for measuring behavior in real-life settings, and computational models that connect health-related behaviors to health outcomes in order to enable novel forms of treatment. We are creating machine learning methods that are inspired by child development and investigating biologically-inspired approaches to robot navigation and control.

Prospective Students: If you are interested in joining our group and are not currently at UIUC, please apply directly to the university. For current/incoming UIUC students, please fill out this form.

People


Principal Investigator

James M. Rehg

Founder Professor


Lab Members

Junho Kim

Postdoc

Fiona Ryan

PhD Student

Max Xu

PhD Student

Bolin Lai

PhD Student

Xiang Li

PhD Student

Ozgur Kara

PhD Student

Xu Cao

PhD Student

Wenqi Jia

PhD Student

Ana Jojic

PhD Student

Wanting Mao

PhD Student

Bikram Boote

PhD Student

Nan Huang

PhD Student

Vipin Gunda

PhD Student

Brian Zhao

PhD Student

Zirui Wang

Master's Student

Pranav Virupaksha

Master's Student

Vedant Mahajan

Master's Student

Tarik Ozden

Master's Student

Sachidanand VS

Master's Student

Yifan Xu

Master's Student


Alumni

Projects

AutoRally

Autonomous driving

Developmental Machine Learning

Mobile and Computational Health

Publications

Gaze-LLE: Gaze Target Estimation via Large-scale Learned Encoders

CVPR 2025 (Highlight, Acceptance rate 3.0%)

Improving Personalized Search with Regularized Low-Rank Parameter Updates

CVPR 2025 (Highlight, Acceptance rate 3.0%)

ShotAdapter: Text-to-Multi-Shot Video Generation with Diffusion Models

CVPR 2025

SocialGesture: Delving into Multi-person Gesture Understanding

CVPR 2025

SPAR3D: Stable Point-Aware Reconstruction of 3D Objects from Single Images

CVPR 2025

Symmetry Strikes Back: From Single-Image Symmetry Detection to 3D Generation

CVPR 2025 (Highlight, Acceptance rate 3.0%)

Unleashing In-context Learning of Autoregressive Models for Few-shot Image Manipulation

CVPR 2025 (Highlight, Acceptance rate 3.0%)

RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data

ICLR 2025

Leveraging Object Priors for Point Tracking

ECCV 2024 ILR Workshop

3x2: 3D Object Part Segmentation by 2D Semantic Correspondences

ECCV 2024

LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction Tuning

ECCV 2024 (Oral, Acceptance rate 2.3%, 🏆 Best Paper Finalist)

Listen to Look into the Future: Audio-Visual Egocentric Gaze Anticipation

ECCV 2024

MAPLM: A Real-World Large-Scale Vision-Language Dataset for Map and Traffic Scene Understanding

CVPR 2024

PointInfinity: Resolution-Invariant Point Diffusion Models

CVPR 2024

ZeroShape: Regression-based Zero-shot Shape Reconstruction

CVPR 2024

Datasets

Georgia Tech Egocentric Activity Datasets

Toys4K 3D Object Dataset

CVPR 2021

4,000 3D object instances from 105 categories of developmentally plausible objects

Sponsors

NIH NIBIB P41-EB028242: mHealth Center for Discovery, Optimization, and Translation of Temporally-Precise Interventions (mDOT)

NSF OIA 1936970: C-Accel Phase 1: Empowering Neurodiverse Populations for Employment through Inclusion AI and Innovation Science

NSF CNS 1823201: CRI: mResearch: A platform for Reproducible and Extensible Mobile Sensor Big Data Research

NIH NIMH R01-MH114999: Data-Driven Multidimensional Modeling of Nonverbal Communication in Typical and Atypical Development

Contact